bias bounty
Bound by the Bounty: Collaboratively Shaping Evaluation Processes for Queer AI Harms
Organizers of QueerInAI; Nathan Dennler; Anaelia Ovalle; Ashwin Singh; Luca Soldaini; Arjun Subramonian; Huy Tu; William Agnew; Avijit Ghosh; Kyra Yee; Irene Font Peradejordi; Zeerak Talat; Mayra Russo; Jess de Jesus de Pinho Pinhal
Bias evaluation benchmarks and dataset and model documentation have emerged as central processes for assessing the biases and harms of artificial intelligence (AI) systems. However, these auditing processes have been criticized for their failure to integrate the knowledge of marginalized communities and consider the power dynamics between auditors and the communities. Consequently, modes of bias evaluation have been proposed that engage impacted communities in identifying and assessing the harms of AI systems (e.g., bias bounties). Even so, asking what marginalized communities want from such auditing processes has been neglected. In this paper, we ask queer communities for their positions on, and desires from, auditing processes. To this end, we organized a participatory workshop to critique and redesign bias bounties from queer perspectives. We found that when given space, the scope of feedback from workshop participants goes far beyond what bias bounties afford, with participants questioning the ownership, incentives, and efficacy of bounties. We conclude by advocating for community ownership of bounties and complementing bounties with participatory processes (e.g., co-creation).
- North America > United States > California > Los Angeles County > Los Angeles (0.14)
- North America > United States > California > San Francisco County > San Francisco (0.14)
- North America > United States > New York > New York County > New York City (0.05)
- (18 more...)
- Research Report (1.00)
- Instructional Material > Course Syllabus & Notes (0.34)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Health & Medicine (0.93)
Council Post: Responsible AI Comes Of Age (And Customers Love It)
Linh C. Ho has held executive leadership roles for a number of global tech companies and currently serves as Chief Growth Officer at Zelros. It is no surprise that technology as ubiquitous as artificial intelligence (AI) would eventually require ethical guardrails. Just this past fall, the White House announced a Blueprint for an AI Bill of Rights. In it, the administration proposes a five-part framework for companies using automated systems in their operations: effective and safe systems; data privacy; protections against algorithmic discrimination; notice and explanation; and human alternatives, consideration and fallback. Together, the five principles in the Bill of Rights form an overlapping set of backstops: safeguards intended to help keep the American public free from any harm caused by the unchecked use of AI and other emerging technologies.
- North America > United States (1.00)
- North America > Canada (0.05)
- Europe > Germany (0.05)
- Europe > France (0.05)
A bias bounty for AI will help to catch unfair algorithms faster
Today a group of AI and machine-learning experts are launching a new bias bounty competition, which they hope will speed the process of uncovering these kinds of embedded prejudice. The competition, which takes inspiration from bug bounties in cybersecurity, calls on participants to create tools to identify and mitigate algorithmic biases in AI models. It's being organized by a group of volunteers who work at companies like Twitter, software company Splunk, and deepfake detection startup Reality Defender. They've dubbed themselves the "Bias Buccaneers." The first bias bounty competition is going to focus on biased image detection.
How Ethical AI Is Redefining Data Strategy
Industries such as insurance that handle personal information are paying more attention to customers' desire for responsible, transparent AI. AI (artificial intelligence) is a tremendous asset to companies that use predictive modeling and have automated tasks. However, AI is still facing problems with data bias. After all, AI gets its marching orders from human-generated data, which by its nature is prone to bias, no matter how evolved we humans like to think we are. With the wide adoption of AI, many industries are starting to pay attention to a new form of governance called responsible or ethical AI.
- Information Technology > Security & Privacy (0.37)
- Banking & Finance > Insurance (0.32)
La veille de la cybersécurité
At least five large companies will introduce "bias bounties" or hacker competitions to identify bias in artificial intelligence (AI) algorithms, predicts the just-released "North American Predictions 2022" from Forrester. Bias bounties are modeled on bug bounties, which reward hackers or coders (often, outside the organizations) who detect problems in security software. In late July, Twitter launched the first major bias bounty and awarded $3,500 to a student who proved that its image cropping algorithm favors lighter, slimmer and younger faces. "Finding bias in machine learning (ML) models is difficult, and sometimes, companies find out about unintended ethical harms once they've already reached the public," wrote Rumman Chowdhury, director of Twitter META, in a blog entry. "We want to change that."
Twitter offers bug bounty to spot AI bias so it can fix its algorithms
Twitter has a new way to rid itself of artificial intelligence bias: pay outsiders to help it find problems. On Friday, the short-message app maker detailed a new bounty competition that offers prizes of up to $3,500 for showing Twitter how its technology incorrectly handles photos. Earlier this year, Twitter confirmed a problem in its automatic photo cropping mechanism, concluding the software favored white people over Black people. The cropping mechanism, which Twitter calls its "saliency algorithm," is supposed to present the most important section of an image when you're scrolling through tweets. Twitter's approach to tackling algorithmic bias, asking outside experts and observers to study its code and results, innovates on bug bounties, which have historically been used for reporting security vulnerabilities.
AI experts call for 'bias bounties' to boost ethics scrutiny – Government & civil service news
Experts from the private sector and leading research labs in the US and Europe have joined forces to create a toolkit for turning AI ethics principles into practice. The preprint paper, published last week, advocates paying people for finding risks of bias in artificial intelligence (AI) systems, adapting a model used to check the security of new computer systems, in which hackers are paid 'bounties' for identifying weaknesses. The paper also proposes better linking independent third-party auditing operations and government policies to foster a market in regulatory systems, and suggests that governments increase funding for researchers in academia to verify performance claims made by industry. The 80-page paper, Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims, has been put together by AI specialists from 30 organisations including Google Brain, Intel, OpenAI, Stanford University and the Leverhulme Centre for the Future of Intelligence. "In order for AI developers to earn trust from system users, customers, civil society, governments, and other stakeholders that they are building AI responsibly, there is a need to move beyond [ethics] principles to a focus on mechanisms for demonstrating responsible behaviour," the executive summary reads.
- North America > United States (0.26)
- Europe (0.26)
- Oceania > New Zealand (0.06)
- (2 more...)
- Government (1.00)
- Law > Statutes (0.32)
AI researchers propose 'bias bounties' to put ethics principles into practice
Researchers from Google Brain, Intel, OpenAI, and top research labs in the U.S. and Europe joined forces this week to release what the group calls a toolbox for turning AI ethics principles into practice. The kit for organizations creating AI models includes the idea of paying developers for finding bias in AI, akin to the bug bounties offered in security software. This recommendation and other ideas for ensuring AI is made with public trust and societal well-being in mind were detailed in a preprint paper published this week. The bug bounty hunting community might be too small to create strong assurances, but developers could still unearth more bias than is revealed by measures in place today, the authors say. "Bias and safety bounties would extend the bug bounty concept to AI and could complement existing efforts to better document data sets and models for their performance limitations and other properties," the paper reads.
- Europe (0.25)
- North America > United States > California > San Francisco County > San Francisco (0.05)
- Research Report (0.52)
- Summary/Review (0.37)